Multitask learning of context-dependent targets in deep neural network acoustic models

Abstract

This paper investigates the use of multitask learning to improve context-dependent deep neural network (DNN) acoustic models. The use of hybrid DNN systems with clustered triphone targets is now standard in automatic speech recognition. However, we suggest that using a single set of DNN targets in this manner may not be the most effective choice, since the targets are the result of a somewhat arbitrary clustering process that may not be optimal for discrimination. We propose to remedy this problem through the addition of secondary tasks predicting alternative context-dependent or context-independent targets. We present a comprehensive set of experiments on a lecture recognition task showing that DNNs trained through multitask learning in this manner give consistently improved performance compared to standard hybrid DNNs. The technique is evaluated across a range of data and output sizes, and improvements are seen both when training uses the cross-entropy criterion and when sequence training is applied.
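To make the multitask setup concrete, the sketch below shows one plausible realisation of such a network: shared hidden layers feed a primary softmax over clustered triphone (senone) targets and a secondary softmax over context-independent monophone targets, with the two cross-entropy losses interpolated during training. This is a minimal illustrative sketch, not the authors' implementation; the layer sizes, sigmoid activations, and the secondary-task weight lambda_sec are assumptions.

```python
import torch
import torch.nn as nn

class MultitaskAcousticDNN(nn.Module):
    """Shared hidden layers with two output heads: a primary head over
    clustered triphone (senone) targets and a secondary head over
    context-independent monophone targets. Dimensions are illustrative."""

    def __init__(self, feat_dim=440, hidden_dim=2048, n_layers=6,
                 n_senones=6000, n_monophones=120):
        super().__init__()
        layers, in_dim = [], feat_dim
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.Sigmoid()]
            in_dim = hidden_dim
        self.shared = nn.Sequential(*layers)
        self.senone_head = nn.Linear(hidden_dim, n_senones)        # primary task
        self.monophone_head = nn.Linear(hidden_dim, n_monophones)  # secondary task

    def forward(self, x):
        h = self.shared(x)
        return self.senone_head(h), self.monophone_head(h)

def multitask_loss(senone_logits, mono_logits, senone_tgt, mono_tgt,
                   lambda_sec=0.3):
    """Interpolated cross-entropy over the two tasks; lambda_sec (an
    assumed hyperparameter) weights the secondary task."""
    ce = nn.CrossEntropyLoss()
    return ce(senone_logits, senone_tgt) + lambda_sec * ce(mono_logits, mono_tgt)
```

At test time the secondary head would simply be discarded and decoding would proceed from the senone posteriors, as in a standard hybrid system; the secondary task serves only to regularise the shared representation during training.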
